Abstract:
Data stores and cloud services are typically accessed using a client-server paradigm in which the client runs as part of an application process that accesses the data store or cloud service. This paper presents the design and implementation of enhanced clients for improving both the functionality and performance of applications accessing data stores or cloud services. Our enhanced clients can improve performance via multiple types of caches, encrypt data to provide confidentiality before sending information to a server, and compress data to reduce the size of data transfers. Our clients can perform data analysis to allow applications to use cloud services more effectively. They also provide both synchronous and asynchronous interfaces. An asynchronous interface allows an application program to access a data store or cloud service and continue execution before receiving a response, which can significantly improve performance. We present a Universal Data Store Manager (UDSM) which allows an application to access multiple different data stores and provides a common interface to each data store. The UDSM can also monitor the performance of different data stores. A workload generator allows users to easily determine and compare the performance of different data stores. We also present NLU-SA, an application for performing natural language understanding and sentiment analysis on text documents. NLU-SA is implemented on top of our enhanced clients and integrates text analysis with Web searching. We present results from NLU-SA on sentiment on the Web towards major companies and countries. We also present a performance analysis of our enhanced clients.
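The synchronous/asynchronous distinction described above can be sketched in a few lines. This is a minimal illustration, not the UDSM implementation: the class and method names are invented, and the remote data store is simulated by a local dictionary.

```python
# Illustrative sketch of a client offering both synchronous and
# asynchronous access; the "remote" data store is simulated in-process.
from concurrent.futures import ThreadPoolExecutor

class EnhancedClient:
    """Hypothetical client; names are illustrative, not the UDSM API."""
    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=4)
        self._store = {"k1": "v1"}          # simulated remote data store

    def get_sync(self, key):
        return self._store.get(key)         # blocks until the value returns

    def get_async(self, key):
        # Returns immediately with a Future; the application continues
        # executing and collects the result later.
        return self._pool.submit(self._store.get, key)

client = EnhancedClient()
future = client.get_async("k1")   # application keeps running here
value = future.result()           # block only when the value is needed
```

The asynchronous call overlaps the data-store round trip with other application work, which is where the performance gain mentioned above comes from.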
Abstract:
Emerging distributed query-processing systems support flexible execution strategies in which each query can be run using a combination of data shipping and query shipping. As in any distributed environment, these systems can obtain tremendous performance and availability benefits by employing dynamic data caching. When flexible execution and dynamic caching are combined, however, a circular dependency arises: caching occurs as a by-product of query operator placement, but query operator placement decisions are based on (cached) data location. The practical impact of this dependency is that query optimization decisions that appear valid on a per-query basis can actually cause suboptimal performance for all queries in the long run.
Abstract:
Caching of frequently accessed data has been shown to be a useful technique for reducing congestion on the narrow bandwidth of wireless channels. However, traditional client/server strategies for supporting transactional cache consistency, which require extensive communication between a client and a server, are not appropriate in a wireless mobile database. This paper proposes two simple but effective transactional cache consistency protocols for mobile read-only transactions that utilize broadcast-based solutions to the cache invalidation problem. The novelty of our approach is that the consistency check on accessed data and the commitment protocol are implemented in a truly distributed fashion as an integral part of the cache invalidation process. The applicability of the proposed techniques is also examined by an analytical study.
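The broadcast-based idea can be illustrated with a toy model: the server periodically broadcasts an invalidation report listing items updated since the last report, each client purges stale entries, and a read-only transaction commits only if nothing it read was invalidated. This sketch is not the paper's protocol; the report contents, class names, and abort rule are simplifications.

```python
# Toy model of broadcast-based invalidation for mobile read-only
# transactions; the server side is represented only by the report.
class MobileClient:
    def __init__(self):
        self.cache = {}        # item id -> value
        self.read_set = set()  # items read by the current transaction
        self.aborted = False

    def read(self, item, server_value):
        self.read_set.add(item)
        if item not in self.cache:
            self.cache[item] = server_value   # fetch on cache miss
        return self.cache[item]

    def on_invalidation_report(self, updated_items):
        # Drop stale cache entries; abort if the transaction read any.
        if self.read_set & set(updated_items):
            self.aborted = True
        for item in updated_items:
            self.cache.pop(item, None)

client = MobileClient()
client.read("x", 1)
client.on_invalidation_report(["y"])   # unrelated update: still consistent
ok_after_y = not client.aborted
client.on_invalidation_report(["x"])   # "x" was read, so the txn aborts
```

Note how the consistency check piggybacks on the invalidation broadcast itself, with no extra client-to-server messages, which is the property the abstract highlights.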
Abstract:
Advances in computing and communication technologies have resulted in a wide variety of networked mobile devices that access data over the Internet. In this paper, we argue that servers by themselves may not be able to handle this diversity in client characteristics, and so intermediaries, such as proxies, should be employed to handle the mismatch between the server-supplied data and the client capabilities. Since existing proxies are primarily designed to handle traditional wired hosts, such proxy architectures will need to be enhanced to handle mobile devices. We propose such an enhanced proxy architecture that is capable of handling the heterogeneity in client needs, specifically the variations in client bandwidth and display capabilities. Our architecture combines transcoding (which is used to match the fidelity of the requested object to client capabilities) and caching (which is used to reduce the latency for accessing popular objects). Proxies that Transcode and Cache, PTCs, intelligently adapt to prevailing system conditions using learning techniques to decide whether to transcode locally or fetch an appropriate version from the server. Our experimental results indicate that the use of PTCs produces significant improvements in the client response times. We show that such results hold true for a variety of data content types like images and video data. Further, we find that even simple learning techniques can lead to significant performance improvements.
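A minimal sketch of the transcode-or-fetch decision, assuming a simple exponentially weighted moving-average cost model rather than the learning techniques actually used by PTCs; all names and numbers are illustrative.

```python
# Toy stand-in for a PTC's adaptive decision: keep a running cost
# estimate per action and pick the currently cheaper one.
class PTC:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.cost = {"transcode": 1.0, "fetch": 1.0}   # initial estimates

    def choose(self):
        # Pick the action with the lowest estimated cost so far.
        return min(self.cost, key=self.cost.get)

    def observe(self, action, latency):
        # Exponentially weighted moving average of observed latency.
        c = self.cost[action]
        self.cost[action] = (1 - self.alpha) * c + self.alpha * latency

ptc = PTC()
for _ in range(10):
    ptc.observe("fetch", 5.0)       # the server path is slow right now
    ptc.observe("transcode", 0.5)   # local transcoding is fast
```

Even this trivial estimator adapts to prevailing conditions, which is consistent with the abstract's finding that simple learning techniques already yield significant gains.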
Abstract:
With the evolution of cloud computing techniques, web browsing has become the main activity for computer users, and researchers have devoted themselves to studying attack characteristics in order to classify attacks delivered via Web connections. Because server-side defense technologies have made great progress, attackers now target the other side: the client side of personal computers. Drive‐by download is the most commonly used method to attack client computers; it exploits software vulnerabilities on the computers to secretly download and install malware, which is then used to collect information from the affected computers or to use them as stepping stones to the next victims. As protection technology improved and user awareness grew, attackers developed a new variant of this attack, named 'drive‐by cache', which has different attack steps and counterfeit file extensions compared with the original 'drive‐by download'; these changes elude existing detection methods. This paper proposes a defense-in-depth scheme to detect and prevent this type of attack by collaboratively combining many existing solutions in five-level phases. Copyright © 2014 John Wiley & Sons, Ltd.
Abstract:
In mobile client-server database systems, caching of frequently accessed data is an important technique for reducing contention on the narrow-bandwidth wireless channel. Since the server in mobile environments may not have any information about the state of its clients' caches (a stateless server), using a broadcasting approach to transmit updated data lists to numerous concurrent mobile clients is attractive.
Abstract:
A cache stores data in order to serve future requests to those data faster. In mobile devices, the data have to be transferred from a server through the mobile network before being stored in the cache. The mobile network is prone to failure caused by users' movements and by the placement of base transceiver stations. Moreover, mobile devices use various telecommunications technologies, and therefore the speed of the network is highly variable. Using a cellular network for communication is also expensive. The cache is an intermediate component that addresses these problems. Once the data are downloaded, they can be stored in the cache for possible future reuse. When using a cache, the system designer presumes that the data will be requested again in the future. On the other hand, the original data stored on the server can change, leaving the cached data in an inconsistent state. In this paper, the authors present an adaptive method for maintaining the consistency of cached data that saves network traffic by reducing the number of messages needed for inconsistency detection.
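The message-saving idea resembles conditional revalidation (as in HTTP's If-None-Match): the client presents the version it holds, and the server answers with a short "unchanged" message instead of re-sending the data. The sketch below illustrates that general pattern, not the adaptive method from the paper; all names are invented.

```python
# Conditional-revalidation sketch: a short "unchanged" reply replaces a
# full data transfer whenever the cached version is still current.
class Server:
    def __init__(self):
        self.data = {"doc": ("v1", "hello")}   # key -> (version, value)

    def check(self, key, client_version):
        version, value = self.data[key]
        if version == client_version:
            return ("unchanged", None)         # short message only
        return ("changed", (version, value))   # full transfer

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}   # key -> (version, value)

    def get(self, key):
        cached = self.cache.get(key)
        status, payload = self.server.check(key, cached[0] if cached else None)
        if status == "changed":
            self.cache[key] = payload
        return self.cache[key][1], status

server = Server()
client = Client(server)
v1, s1 = client.get("doc")            # first read: full transfer
v2, s2 = client.get("doc")            # revalidation: "unchanged" reply
server.data["doc"] = ("v2", "world")  # the server-side copy changes
v3, s3 = client.get("doc")            # full transfer again
```

The more often the data are unchanged, the more traffic this saves, since only a version token crosses the expensive wireless link.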
Abstract:
Proxy-caching strategies, especially prefix caching and interval caching, are commonly used in video-on-demand (VOD) systems to improve both the system performance and the playback experience of users. However, because these caching strategies are designed for homogeneous clients, they do not perform well in the real world where clients are heterogeneous (i.e., different available network bandwidths and different sizes of client-side buffers). This paper investigates the problems caused by heterogeneous client-side buffers. We analyze the theoretical performance of these caching strategies and then derive cost functions to measure the corresponding performance gains. Based on these analytical results, we develop a caching strategy that employs both prefix caching and interval caching to minimize the input bandwidth of a proxy. The simulation results demonstrate that the bandwidth requirements of a proxy implementing our caching strategy are significantly lower compared to adopting prefix caching or interval caching alone.
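Prefix caching itself can be shown with a toy proxy: the cached prefix is served locally, and only the suffix consumes the proxy's input (server-side) bandwidth. Segment granularity and names are illustrative; interval caching and the paper's cost functions are omitted.

```python
# Toy model of prefix caching in a VOD proxy: playback starts from the
# cached prefix while the remaining segments stream from the server.
class PrefixCacheProxy:
    def __init__(self, prefix_len):
        self.prefix_len = prefix_len
        self.store = {}   # video id -> list of cached prefix segments

    def cache_prefix(self, video, segments):
        self.store[video] = segments[:self.prefix_len]

    def serve(self, video, segments):
        cached = self.store.get(video, [])
        suffix = segments[len(cached):]   # must be fetched from the server
        return cached + suffix, len(suffix)

proxy = PrefixCacheProxy(prefix_len=3)
video = [f"seg{i}" for i in range(10)]
proxy.cache_prefix("movie", video)
stream, server_segments = proxy.serve("movie", video)
```

Besides saving proxy input bandwidth, serving the prefix locally also hides the startup latency, which is why prefix caching improves the playback experience mentioned above.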
Abstract:
The real-time visualization of 3D GIS at a whole-city scale always faces the challenge of highly efficient dynamic data loading. Based on the multi-tier distributed 3D GIS framework, this paper presents a multi-level cache approach for dynamic data loading. It establishes, in the 3D GIS spatial database engine (3DGIS-SDE), a unified management mechanism for caches on three levels: the client memory cache (CMC) oriented to sharing applications, the client file cache (CFC) organized by index, and the application server memory cache (ASMC) of structural consistency. With the optimized cache replacement policy, multi-level cache consistency maintenance, and multithreaded loading model designed in the paper, the engine is able to adaptively make full use of the caches at each level according to their own application properties and achieve effective coordination between them. Finally, a practical 3D GIS database based on Oracle 11g is employed for testing. The experimental results show that this approach can support multi-user concurrent 3D visual exploration.
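The three-tier lookup can be sketched as follows; the tiers are plain dictionaries, and the promotion rule is a simplification (the paper's replacement policy, consistency maintenance, and multithreaded loading are omitted).

```python
# Sketch of a CMC -> CFC -> ASMC lookup path: check the fastest tier
# first and promote hits toward the client memory cache.
class MultiLevelCache:
    def __init__(self):
        self.cmc = {}   # client memory cache (fastest)
        self.cfc = {}   # client file cache (persistent, indexed)
        self.asmc = {}  # application server memory cache

    def get(self, key):
        for level, tier in (("CMC", self.cmc), ("CFC", self.cfc),
                            ("ASMC", self.asmc)):
            if key in tier:
                value = tier[key]
                self.cmc[key] = value   # promote so later reads hit the CMC
                return value, level
        return None, "MISS"

cache = MultiLevelCache()
cache.asmc["tile42"] = b"3d-model-bytes"
value, level = cache.get("tile42")     # first read comes from the ASMC
value2, level2 = cache.get("tile42")   # promoted copy now hits the CMC
```

Promotion keeps hot scene data closest to the renderer, which is the coordination between levels that the abstract describes.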
Abstract:
The efficiency of algorithms managing data caches has a major impact on the performance of systems that utilize client-side data caching. In these systems, two versions of data can be maintained without additional overhead by exploiting the replication of data in the server's buffer and clients' caches. In this paper, we present a new cache consistency algorithm employing versions: Two Versions-Callback Locking (2V-CBL). Our experimental results indicate that 2V-CBL provides good performance, and in particular outperforms a leading cache consistency algorithm, Asynchronous Avoidance-based Cache Consistency, when some clients run only read-only transactions.
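The two-version principle can be sketched minimally: readers always see the last committed version while a writer prepares the next. The callback-locking machinery of 2V-CBL is omitted here, and all names are illustrative.

```python
# Sketch of two-version data management: read-only transactions read the
# committed version while an updater stages an uncommitted one.
class TwoVersionStore:
    def __init__(self):
        self.committed = {}   # item -> last committed version
        self.pending = {}     # item -> uncommitted new version

    def read(self, item):
        return self.committed.get(item)   # readers never see pending data

    def write(self, item, value):
        self.pending[item] = value        # stage the new version

    def commit(self, item):
        self.committed[item] = self.pending.pop(item)

store = TwoVersionStore()
store.committed["a"] = 1
store.write("a", 2)              # an updater installs a new version
read_during = store.read("a")    # a read-only txn still sees version 1
store.commit("a")
read_after = store.read("a")     # now sees version 2
```

Because the old version stays readable during the update, read-only transactions need not block or abort, which matches the workload where 2V-CBL is reported to excel.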